Gauss–Newton algorithm
The Gauss–Newton algorithm is used to solve non-linear least squares problems. It is a modification of Newton's method for finding a minimum of a function. Unlike Newton's method, the Gauss–Newton algorithm can only be used to minimize a sum of squared function values, but it has the advantage that second derivatives, which can be challenging to compute, are not required.
Non-linear least squares problems arise for instance in non-linear regression, where parameters in a model are sought such that the model is in good agreement with available observations.
The method is named after the mathematicians Carl Friedrich Gauss and Isaac Newton.
== Description ==
Given ''m'' functions '''r''' = (''r''<sub>1</sub>, …, ''r''<sub>''m''</sub>) (often called residuals) of ''n'' variables '''β''' = (''β''<sub>1</sub>, …, ''β''<sub>''n''</sub>), with ''m'' ≥ ''n'', the Gauss–Newton algorithm iteratively finds the minimum of the sum of squares〔Björck (1996)〕
: S(\boldsymbol \beta) = \sum_{i=1}^m r_i^2(\boldsymbol \beta).
Starting with an initial guess \boldsymbol \beta^{(0)} for the minimum, the method proceeds by the iterations
: \boldsymbol \beta^{(s+1)} = \boldsymbol \beta^{(s)} - \left(\mathbf{J_r}^\mathsf{T} \mathbf{J_r}\right)^{-1} \mathbf{J_r}^\mathsf{T} \mathbf{r}(\boldsymbol \beta^{(s)})
where, if '''r''' and '''β''' are column vectors, the entries of the Jacobian matrix are
: (\mathbf{J_r})_{ij} = \frac{\partial r_i(\boldsymbol \beta^{(s)})}{\partial \beta_j},
and the symbol ^\mathsf{T} denotes the matrix transpose.
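A minimal sketch of one such update step in Python with NumPy (the callables ''residuals'' and ''jacobian'' are assumed to be supplied by the caller; they are not part of the text above). In practice the step is computed by solving the normal equations rather than forming the matrix inverse explicitly:
<syntaxhighlight lang="python">
import numpy as np

def gauss_newton_step(residuals, jacobian, beta):
    """One Gauss-Newton update: beta - (J^T J)^{-1} J^T r(beta)."""
    r = residuals(beta)   # length-m residual vector r(beta)
    J = jacobian(beta)    # m-by-n Jacobian J_r evaluated at beta
    # Solve the normal equations (J^T J) delta = J^T r for the step
    # instead of computing the inverse explicitly.
    delta = np.linalg.solve(J.T @ J, J.T @ r)
    return beta - delta
</syntaxhighlight>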
If ''m'' = ''n'', the iteration simplifies to
: \boldsymbol \beta^{(s+1)} = \boldsymbol \beta^{(s)} - \left(\mathbf{J_r}\right)^{-1} \mathbf{r}(\boldsymbol \beta^{(s)})
which is a direct generalization of Newton's method in one dimension.
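In this square case the factor (\mathbf{J_r}^\mathsf{T} \mathbf{J_r})^{-1} \mathbf{J_r}^\mathsf{T} collapses to \mathbf{J_r}^{-1}, so each step is an ordinary Newton step for solving '''r'''(''β'') = 0. A sketch under the same assumptions as the snippet above:
<syntaxhighlight lang="python">
def newton_step(residuals, jacobian, beta):
    # With m = n the Jacobian is square, so the update reduces to
    # solving J delta = r, i.e. one Newton step for r(beta) = 0.
    r = residuals(beta)
    J = jacobian(beta)
    return beta - np.linalg.solve(J, r)
</syntaxhighlight>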
In data fitting, where the goal is to find the parameters ''β'' such that a given model function ''y'' = ''f''(''x'', ''β'') best fits some data points (''x''<sub>''i''</sub>, ''y''<sub>''i''</sub>), the functions ''r''<sub>''i''</sub> are the residuals
: r_i(\boldsymbol \beta)= y_i - f(x_i, \boldsymbol \beta).
Then, the Gauss–Newton method can be expressed in terms of the Jacobian '''J'''<sub>''f''</sub> of the function ''f'' as
: \boldsymbol \beta^{(s+1)} = \boldsymbol \beta^{(s)} + \left(\mathbf{J_f}^\mathsf{T} \mathbf{J_f}\right)^{-1} \mathbf{J_f}^\mathsf{T} \mathbf{r}(\boldsymbol \beta^{(s)}).
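For instance, the following self-contained sketch fits the hypothetical model ''y'' = ''β''<sub>1</sub> exp(''β''<sub>2</sub>''x'') to a few made-up data points (the model, the data, and the fixed iteration count are illustrative assumptions, not taken from the text above):
<syntaxhighlight lang="python">
import numpy as np

def fit_gauss_newton(x, y, beta, iterations=10):
    """Fit y ~ beta[0] * exp(beta[1] * x) by Gauss-Newton iteration."""
    for _ in range(iterations):
        f = beta[0] * np.exp(beta[1] * x)
        r = y - f  # residuals r_i = y_i - f(x_i, beta)
        # Jacobian of f with respect to beta at each x_i:
        # df/dbeta1 = exp(beta2 * x), df/dbeta2 = beta1 * x * exp(beta2 * x).
        Jf = np.column_stack((np.exp(beta[1] * x),
                              beta[0] * x * np.exp(beta[1] * x)))
        # beta <- beta + (Jf^T Jf)^{-1} Jf^T r, via the normal equations.
        beta = beta + np.linalg.solve(Jf.T @ Jf, Jf.T @ r)
    return beta

# Made-up data roughly following y = 2 * exp(0.5 * x).
x = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
y = np.array([2.1, 2.5, 3.3, 4.2, 5.5])
print(fit_gauss_newton(x, y, beta=np.array([1.0, 1.0])))
</syntaxhighlight>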

Excerpt source: Wikipedia, the free encyclopedia. Read the full text of "Gauss–Newton algorithm" on Wikipedia.